21 research outputs found

    "Hey Model!" -- Natural User Interactions and Agency in Accessible Interactive 3D Models

    Full text link
    While developments in 3D printing have opened up opportunities for improved access to graphical information for people who are blind or have low vision (BLV), 3D printed models can provide only limited detail and contextual information. Interactive 3D printed models (I3Ms) that provide audio labels and/or a conversational agent interface can potentially overcome this limitation. We conducted a Wizard-of-Oz exploratory study to uncover the multi-modal interaction techniques that BLV people would like to use when exploring I3Ms, and investigated their attitudes towards different levels of model agency. These findings informed the creation of an I3M prototype of the solar system. A second user study with this model revealed a hierarchy of interaction, with BLV users preferring tactile exploration, followed by touch gestures to trigger audio labels, and then natural language to fill in knowledge gaps and confirm understanding. Paper presented at ACM CHI 2020: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM, New York, April 2020.
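    The interaction hierarchy found in the second study maps naturally onto a simple event-dispatch structure. The following Python sketch is a hypothetical illustration, not the authors' implementation; ModelPart, speak, on_touch and on_question are invented names. A touch gesture plays a short audio label, and a natural-language question serves as the fallback for details the label does not cover.

        from dataclasses import dataclass

        @dataclass
        class ModelPart:
            name: str
            audio_label: str   # short description played on a touch gesture
            details: dict      # extra facts served via natural language

        def speak(text: str) -> None:
            print(f"[TTS] {text}")   # stand-in for a real text-to-speech call

        def on_touch(part: ModelPart) -> None:
            # Second level of the hierarchy: a tap plays the audio label.
            speak(part.audio_label)

        def on_question(part: ModelPart, question: str) -> None:
            # Third level: natural language fills remaining knowledge gaps.
            for key, value in part.details.items():
                if key in question.lower():
                    speak(f"{part.name}: {value}")
                    return
            speak(f"Sorry, I don't know that about {part.name}.")

        mars = ModelPart("Mars", "Mars, the fourth planet from the Sun.",
                         {"moons": "Mars has two moons, Phobos and Deimos."})
        on_touch(mars)
        on_question(mars, "How many moons does it have?")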

    Review of quantitative empirical evaluations of technology for people with visual impairments

    Get PDF
    Addressing the needs of visually impaired people is of continued interest in Human-Computer Interaction (HCI) research. Yet one of the major challenges facing researchers in this field continues to be how to design adequate quantitative empirical evaluations for these users. In this paper, we analyse a corpus of 178 papers on technologies designed for people with visual impairments, published since 1988 and including at least one quantitative empirical evaluation (243 evaluations in total). To inform future research in this area, we provide an overview, historical trends and a unified terminology for designing and reporting quantitative empirical evaluations. We identify open issues and propose a set of guidelines to address them. Our analysis aims to facilitate and stimulate future research on this topic.

    Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy

    No full text
    Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to help disadvantaged groups such as blind and visually impaired people. Virtual objects and environments that can be spatially explored are particularly beneficial because they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy that clusters the work done so far from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback (‘small scale’) in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent a promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.
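    To make the taxonomy concrete, the classification could be encoded as a small data model like the sketch below; Scale, Perspective and SurveyEntry are illustrative names, not the authors' database schema.

        from enum import Enum
        from dataclasses import dataclass

        class Scale(Enum):
            SMALL = "grounded force-feedback device"   # desktop haptics
            MEDIUM = "physically walkable environment"
            LARGE = "avatar-walkable environment"

        class Perspective(Enum):
            EXOCENTRIC = "exocentric"
            EGOCENTRIC = "egocentric"

        @dataclass
        class SurveyEntry:
            title: str
            scale: Scale
            perspective: Perspective
            interfaces: list[str]

        entry = SurveyEntry(
            title="Example haptic VR study",
            scale=Scale.SMALL,
            perspective=Perspective.EXOCENTRIC,
            interfaces=["grounded force-feedback arm"],
        )
        print(entry.scale.value, "/", entry.perspective.value)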

    VEPdgets: Towards Richer Interaction Elements Based on Visually Evoked Potentials

    No full text
    For brain–computer interfaces, a variety of technologies and applications already exist. However, current approaches use visually evoked potentials (VEPs) only as action triggers or in combination with other input technologies. This paper shows that the loss of the visually evoked potential after the user looks away from a stimulus is a reliable temporal parameter. The associated latency can be used to control time-varying variables via the VEP. In this context, we introduce VEP interaction elements (VEP widgets) for entering numerical values, which can be applied in various ways and are purely based on VEP technology. We carried out a user study in a desktop as well as a virtual reality setting. The results for both settings showed that the temporal control approach using latency correction can be applied to value input with the proposed VEP widgets. Even though value input is not very accurate under untrained conditions, users could enter numerical values. Our concept of applying latency correction to VEP widgets is not limited to the input of numbers.
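    The core idea of latency correction admits a compact sketch: the VEP persists in the EEG signal for some detection latency after the gaze leaves the stimulus, so the raw dwell time over-counts, and subtracting a calibrated latency recovers the intended value. The rate, latency constant and function names below are assumptions for illustration.

        RATE_PER_SECOND = 2.0      # how fast the widget's value grows during fixation
        DETECTION_LATENCY = 0.45   # calibrated per user/system, in seconds

        def corrected_value(vep_onset: float, vep_offset: float) -> float:
            """Value entered via a VEP widget, corrected for detection latency."""
            raw_dwell = vep_offset - vep_onset
            effective_dwell = max(0.0, raw_dwell - DETECTION_LATENCY)
            return effective_dwell * RATE_PER_SECOND

        # The classifier reported the VEP from t=1.00 s until t=4.45 s; the user
        # actually looked away at t=4.00 s, so the corrected value is 6, not 6.9.
        print(corrected_value(1.00, 4.45))   # -> 6.0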

    Studies on human N-acetyltransferase

    Get PDF
    SIGLE record. Available from the British Library Document Supply Centre (BLDSC), DSC:D171983, United Kingdom.

    Evaluation of Capacitive Markers Fabricated by 3D Printing, Laser Cutting and Prototyping

    No full text
    With Tangible User Interfaces, the computer user is able to interact in a fundamentally different and more intuitive way than with conventional 2D displays. By grasping real physical objects, information can also be conveyed haptically, i.e., the user not only sees information on a 2D display but can also grasp physical representations. To recognize such objects (“tangibles”), it is practical to use capacitive sensing, as employed in most touch screens. Real objects can thus be located and identified automatically by the touch screen display. Recent work already addressed such capacitive markers, but focused on their coding scheme and automated fabrication by 3D printing. This paper goes beyond fabrication by 3D printers and, for the first time, applies the concept of capacitive codes to laser cutting and to another immediate prototyping approach using modeling clay. Besides evaluating additional properties, we adapt recent research results regarding the optimized detection of tangible objects on capacitive screens. Our comprehensive study shows that detection performance is affected by the type of capacitive signal processing (i.e., the device) and by the geometry of the marker. 3D printing proved to be the most reliable technique, though laser cutting and immediate prototyping of markers showed promising results. Based on our findings, we discuss the individual strengths of each capacitive marker type.
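    Marker identification on a capacitive screen can be illustrated with a simple geometric matching scheme; this is an assumed approach for illustration, not the paper's detection algorithm. Comparing the sorted pairwise distances of the detected touch points against registered marker geometries makes the match invariant to translation and rotation.

        import math
        from itertools import combinations

        def signature(points: list[tuple[float, float]]) -> list[float]:
            # Sorted pairwise distances: invariant to translation and rotation.
            return sorted(math.dist(a, b) for a, b in combinations(points, 2))

        MARKERS = {
            "triangle_marker": signature([(0, 0), (10, 0), (5, 8)]),
            "line_marker": signature([(0, 0), (8, 0), (16, 0)]),
        }

        def identify(touches: list[tuple[float, float]], tol: float = 1.5):
            sig = signature(touches)
            for name, ref in MARKERS.items():
                if len(ref) == len(sig) and all(
                        abs(a - b) <= tol for a, b in zip(ref, sig)):
                    return name
            return None

        # A marker placed elsewhere on the screen (translated) is still recognized.
        print(identify([(30, 20), (40, 20), (35, 28)]))   # -> triangle_marker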

    3D-Druck für blinde Menschen (3D Printing for Blind People)

    No full text

    InclineType: An accelerometer-based typing approach for smartwatches

    No full text
    Small mobile devices such as smartwatches are a rapidly growing market. However, they share the issue of limited input and output space, which could impede the future success of these devices. Hence, suitable alternatives to the concepts and metaphors known from smartphones have to be found. In this paper we present InclineType, a tilt-based keyboard for smartwatches that uses a 3-axis accelerometer. The user directly selects letters by moving their wrist and enters them by tapping on the touchscreen. Thanks to the distribution of the letters along the edges of the screen, the keyboard occupies little space on the smartwatch display. To optimize user input, our concept proposes multiple techniques to stabilize the interaction. Finally, a user study shows that users become familiar with this technique with almost no previous training, reaching speeds of about 6 wpm on average.
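    The tilt-to-letter mapping with stabilization might look like the sketch below; the smoothing factor, letter layout and function names are assumptions, not the paper's implementation. A low-pass filter steadies the accelerometer-derived roll angle, which is then mapped onto letter slots along the screen edge; a tap commits the highlighted letter.

        import math
        import string

        ALPHA = 0.2                       # low-pass smoothing factor (assumed)
        LETTERS = string.ascii_lowercase  # letters laid out along the screen edges
        _smoothed = 0.0                   # filter state

        def select_letter(ax: float, ay: float, az: float) -> str:
            """Map a smoothed wrist-roll angle to one of 26 letter slots."""
            global _smoothed
            angle = math.degrees(math.atan2(ay, az))             # roll from gravity
            _smoothed = ALPHA * angle + (1 - ALPHA) * _smoothed  # stabilization
            t = (max(-90.0, min(90.0, _smoothed)) + 90.0) / 180.0
            return LETTERS[min(len(LETTERS) - 1, int(t * len(LETTERS)))]

        # Feed a steady ~45-degree roll; the filter converges on one letter,
        # which a tap on the touchscreen would then commit.
        for _ in range(30):
            letter = select_letter(0.0, 0.7, 0.7)
        print(letter)   # -> 't'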

    Contextual grouping of labels

    No full text
    Learning materials frequently employ illustrations with many labels in order to coordinate visual and textual elements. The labels in illustrations mainly support two search tasks: learners can determine the graphical reference objects of unknown terms, or vice versa (i.e., get textual descriptions for unknown visual objects). In traditional print media, human illustrators often overload an illustration with many labels in order to reduce printing costs. But such illustrations ignore the limitations of human cognitive processes. Based on the chunking principle, we argue that interactive 3D visualizations should be carefully adapted to the current search task. We therefore consider semantic relations to select only those labels that are relevant for the current task and emphasize the associated graphical objects. Finally, we present a novel layout algorithm for label grouping to aid learning.
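    The selection-and-chunking argument can be sketched in a few lines; the relation data and the chunk size of seven (after Miller's 7 ± 2) are illustrative assumptions, not the paper's algorithm. Labels unrelated to the current search task are filtered out, and the remainder is split into groups small enough for working memory.

        def relevant_labels(labels, relations, task_concept):
            """Select labels whose concept relates to the current task."""
            return [l for l in labels if task_concept in relations.get(l, set())]

        def chunk(labels, size=7):
            """Split labels into display groups of at most `size` items."""
            return [labels[i:i + size] for i in range(0, len(labels), size)]

        relations = {
            "atrium": {"circulation"}, "ventricle": {"circulation"},
            "aorta": {"circulation"}, "rib": {"skeleton"},
        }
        labels = ["atrium", "ventricle", "aorta", "rib"]
        print(chunk(relevant_labels(labels, relations, "circulation"), size=2))
        # -> [['atrium', 'ventricle'], ['aorta']]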

    Annotation of animated 3D objects

    No full text
    This paper presents a novel approach to illustrating dynamic procedures in tutoring materials by analyzing the corresponding animations. We propose two strategies: (i) to enhance animations with secondary elements (e.g., textual annotations and arrows), and (ii) to generate visual summaries, i.e., illustrations in which secondary elements indicate the direction and extent of moving objects in animations. We propose metrics aiming at an unambiguous and frame-coherent layout in animations. Moreover, we integrated real-time algorithms for laying out secondary elements in animations into an interactive 3D browser. In order to test the impact of various conflicting functional and aesthetic layout requirements, our system contains several algorithms and strategies, which are compared in a user study. Finally, this paper presents the results of applying our approach to enhance scientific illustrations and technical documentation.
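    One way to operationalize frame coherence, sketched here under assumed names and without reproducing the paper's metrics, is to penalize how far an annotation's position jumps between consecutive frames.

        import math

        def coherence_penalty(positions: list[tuple[float, float]]) -> float:
            """Sum of inter-frame displacements of one annotation's position."""
            return sum(math.dist(positions[i - 1], positions[i])
                       for i in range(1, len(positions)))

        stable = [(10, 10), (11, 10), (12, 11)]   # annotation follows its object
        jumpy = [(10, 10), (60, 40), (15, 12)]    # annotation jumps around
        print(coherence_penalty(stable) < coherence_penalty(jumpy))   # -> True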